Module - 1: Basics of Measurement and Instrumentation
On completion of this module, you will be able to:
- Describe different performance characteristics of an instrument
- Identify and calculate the different types of errors present in a measurement
- Explain the importance of calibration
Introduction to Electronic Instruments
- An electronic instrument is one that relies on electronic or electrical principles for its measurement function.
- The measurement of any electronic or electrical quantity or variable is termed an electronic measurement.
Advantages of Electronic Measurement
- The advantages of electronic measurement are:
1. Most physical quantities can be converted by transducers into electrical or electronic signals.
2. An electrical or electronic signal can be amplified, filtered, multiplexed, sampled and measured.
3. The measurement can easily be obtained in or converted into digital form for automatic analysis and recording.
4. The measured signals can be transmitted over long distances with the help of cables or radio links, without any loss of information.
5. Many measurements can be carried out either simultaneously or in rapid succession.
6. Electronic circuits can detect and amplify very weak signals and can measure the events of very short duration as well.
7. Electronic measurement makes it possible to work with both analog and digital signals; digital signals are essential for computer-based processing.
8. Higher sensitivity, low power consumption and a higher degree of reliability are the important features of electronic instruments and measurements.
- The characteristics of a measuring instrument that indicate how well it performs in measuring a quantity or parameter are known as performance characteristics.
Types of Performance Characteristics
- Performance characteristics of instruments can be classified into the following two types.
1. Static Characteristics
2. Dynamic Characteristics
Static Characteristics
- The characteristics of an instrument used to measure quantities or parameters that do not vary with respect to time are called static characteristics.
- Sometimes, these quantities or parameters may vary slowly with respect to time. The static characteristics are:
• Accuracy
• Precision
• Sensitivity
• Resolution
• Static Error
• Linearity
• Repeatability
Accuracy
- The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value.
- The algebraic difference between the indicated value of an instrument, Ai, and the true value, At, is known as the accuracy. Mathematically, it can be represented as:
Accuracy = Ai − At
- The term accuracy signifies how close the indicated value of the instrument, Ai, is to the true value, At.
- The difference between the true value and the indicated value of an instrument is known as the static error. Mathematically, it can be represented as:
Es = At − Ai
- The term static error signifies the inaccuracy of the instrument. If the static error is expressed as a percentage of the true value, it is called the percentage static error. Mathematically, it can be represented as:
% Es = (At − Ai) / At × 100
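As a worked illustration, here is a minimal Python sketch of the static-error and percentage-static-error formulas above; the true and indicated values are made-up numbers, not from the text.

```python
# Minimal sketch of the static-error calculation defined above.
# The readings are illustrative values.

def static_error(true_value, indicated_value):
    """Static error Es = At - Ai."""
    return true_value - indicated_value

def percent_static_error(true_value, indicated_value):
    """Percentage static error: (At - Ai) / At * 100."""
    return static_error(true_value, indicated_value) / true_value * 100

at = 50.0   # true value, e.g. volts
ai = 49.2   # value indicated by the instrument

print(f"static error Es         = {static_error(at, ai):.2f} V")
print(f"percentage static error = {percent_static_error(at, ai):.2f} %")
```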
Precision
- If an instrument indicates the same value repeatedly when it is used to measure the same quantity under the same circumstances any number of times, then the instrument is said to have high precision.
Sensitivity
- The ratio of the change in output, ΔAout, of an instrument to a given change in the input, ΔAin, that is to be measured is called the sensitivity, S. Mathematically, it can be represented as:
S = ΔAout / ΔAin
- The term sensitivity signifies the smallest change in the measurable input that is required for an instrument to respond.
- If the calibration curve is linear, then the sensitivity of the instrument will be a constant and equal to the slope of the calibration curve.
- If the calibration curve is non-linear, then the sensitivity of the instrument will not be a constant and it will vary with respect to the input.
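To make the slope interpretation concrete, the sketch below estimates sensitivity from a set of hypothetical calibration points for a linear 4–20 mA sensor; the data and units are assumptions for illustration.

```python
# Sketch: sensitivity S = dAout / dAin estimated from calibration data.
# Hypothetical points for a linear 4-20 mA pressure sensor.
inputs  = [0.0, 25.0, 50.0, 75.0, 100.0]   # e.g. pressure in kPa
outputs = [4.0, 8.0, 12.0, 16.0, 20.0]     # e.g. current in mA

# For a linear calibration curve the sensitivity is the constant slope.
slopes = [
    (outputs[i + 1] - outputs[i]) / (inputs[i + 1] - inputs[i])
    for i in range(len(inputs) - 1)
]
print("segment slopes:", slopes)            # all equal -> constant sensitivity
print("sensitivity S =", slopes[0], "mA per kPa")
```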
Resolution
- If the output of an instrument changes only when the input changes by some minimum increment, then that increment of the input is called the resolution.
- That means the instrument cannot respond to, or effectively measure, any change in the input smaller than its resolution.
Repeatability
- A measure of how well the output returns to a given value when the same precise input is applied several times, or the ability of an instrument to reproduce a certain set of readings within a given accuracy.
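One common way to quantify repeatability, used here as an illustrative assumption rather than a method given in the text, is the spread (standard deviation) of repeated readings of the same input:

```python
# Sketch: repeatability as the spread of repeated readings of the same
# input. The ten readings below are illustrative.
import statistics

readings = [10.02, 10.01, 10.03, 10.02, 10.00,
            10.02, 10.01, 10.03, 10.02, 10.02]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)   # sample standard deviation

print(f"mean reading  = {mean:.3f}")
print(f"repeatability = +/-{spread:.3f} (1 sigma)")
```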
Range or span
- The range or span of an instrument defines the minimum and maximum values of a quantity that the instrument is designed to measure.
- For example, for a temperature measuring instrument the input range may be 100–500 °C and the output range may be 4–20 mA.
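A small sketch of this example, assuming a simple linear scaling between the 100–500 °C input range and the 4–20 mA output range:

```python
# Sketch of the range/span example above: an input range of 100-500 degC
# mapped linearly onto an output range of 4-20 mA.

IN_MIN, IN_MAX = 100.0, 500.0    # input span: 400 degC
OUT_MIN, OUT_MAX = 4.0, 20.0     # output span: 16 mA

def temp_to_current(t_degc):
    """Linear scaling of a temperature inside the input range to mA."""
    if not IN_MIN <= t_degc <= IN_MAX:
        raise ValueError("temperature outside the instrument's range")
    fraction = (t_degc - IN_MIN) / (IN_MAX - IN_MIN)
    return OUT_MIN + fraction * (OUT_MAX - OUT_MIN)

print(temp_to_current(100.0))   # 4.0  mA (bottom of range)
print(temp_to_current(300.0))   # 12.0 mA (mid range)
print(temp_to_current(500.0))   # 20.0 mA (top of range)
```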
Linearity
- Linearity is actually a measure of the nonlinearity of the instrument. When we talk about sensitivity, we assume the input/output characteristic of the instrument to be approximately linear.
- In practice, however, it is normally nonlinear, as shown in Fig. 1. Linearity is defined as the maximum deviation from the linear characteristic, expressed as a percentage of the full-scale output. Thus:
Linearity (%) = (maximum deviation from the linear characteristic / full-scale output) × 100
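A minimal sketch of this calculation, assuming the ideal straight line is drawn through the end points of the calibration curve (one common convention; the data points are made up):

```python
# Sketch: linearity as the maximum deviation from the ideal straight line,
# expressed as a percentage of full-scale output.

inputs   = [0, 1, 2, 3, 4, 5]
measured = [0.00, 1.02, 2.10, 2.95, 4.05, 5.00]   # slightly nonlinear

full_scale = measured[-1]

# Ideal straight line through the end points of the calibration curve.
slope = (measured[-1] - measured[0]) / (inputs[-1] - inputs[0])
ideal = [measured[0] + slope * (x - inputs[0]) for x in inputs]

max_dev = max(abs(m - i) for m, i in zip(measured, ideal))
print(f"nonlinearity = {max_dev / full_scale * 100:.1f} % of full scale")
```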
Dynamic Characteristics
- The characteristics of the instruments, which are used to measure the quantities or parameters that vary very quickly with respect to time are called dynamic characteristics.
- The dynamic characteristics are:
- Speed of Response
- Dynamic Error
- Fidelity
- Lag
Speed of Response
- The speed at which the instrument responds whenever there is any change in the quantity to be measured is called speed of response. It indicates how fast the instrument is.
Lag
- The amount of delay present in the response of an instrument whenever there is a change in the quantity to be measured is called measuring lag. It is also simply called lag.
Dynamic Error
- The difference between the true value, At, of the quantity that varies with respect to time and the indicated value of an instrument, Ai, is known as the dynamic error: ed = At − Ai.
Fidelity
- The degree to which an instrument indicates changes in the measured quantity without any dynamic error is known as fidelity.
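To tie these characteristics together, here is a sketch that models the instrument as a first-order system (an assumption, not something stated in the text) responding to a step change in the measured quantity; it prints the indicated value and the dynamic error ed = At − Ai over time, making the lag and speed of response visible.

```python
# Sketch: a first-order instrument responding to a step change in the input.
# The time constant is an assumed value.

TAU = 2.0            # instrument time constant in seconds (assumed)
DT = 0.5             # simulation step in seconds
TRUE_VALUE = 100.0   # step input applied at t = 0

indicated = 0.0
for step in range(11):
    t = step * DT
    dynamic_error = TRUE_VALUE - indicated   # ed = At - Ai at time t
    print(f"t={t:4.1f}s  indicated={indicated:6.2f}  dynamic error={dynamic_error:6.2f}")
    # Euler update of dAi/dt = (At - Ai) / tau
    indicated += DT * (TRUE_VALUE - indicated) / TAU
```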
Types of error
- The static error was defined earlier as the difference between the true value of the variable and the value indicated by the instrument. Static error may arise due to a number of reasons.
- The static errors are classified as:
- Gross errors
- Systematic errors
- Random errors
Gross errors
- Gross errors mainly occur due to carelessness or lack of experience of the observer. They cover human mistakes in reading, recording and calculating results.
- These errors also occur due to incorrect adjustments of instruments. These errors cannot be treated mathematically.
- These errors are also called personal errors. Some gross errors are easily detected while others are very difficult to detect.
Systematic errors
- Systematic errors mainly result from the shortcomings of the instrument and the characteristics of the materials used in it, such as defective or worn parts, ageing effects, environmental effects, etc.
- A constant, uniform deviation in the operation of an instrument is known as a systematic error. There are three types of systematic errors:
- Instrumental errors
- Environmental errors
- Observational errors
Random errors
- Some errors still remain even after the systematic and instrumental errors are reduced or at least accounted for.
- The causes of such errors are unknown and hence, the errors are called random errors.
- These errors cannot be determined in the ordinary process of taking the measurements.
Absolute and relative errors
- When the error is specified in terms of an absolute quantity and not as a percentage, it is called an absolute error. Thus a voltage of 10 ± 0.5 V indicates ± 0.5 V as the absolute error.
- When the error is expressed as a percentage or as a fraction of the total quantity to be measured, it is called a relative error (relative error = absolute error / true value).
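A short sketch of the distinction, using the 10 ± 0.5 V example from the text:

```python
# Sketch of absolute vs. relative error for the 10 +/- 0.5 V example.

true_value = 10.0       # volts
absolute_error = 0.5    # volts (the +/- quantity)

relative_error = absolute_error / true_value
print(f"absolute error = +/-{absolute_error} V")
print(f"relative error = {relative_error:.3f} = {relative_error * 100:.1f} %")
```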
Loading effect
- While selecting a meter for a particular measurement, the sensitivity rating is very important. A low-sensitivity meter may give an accurate reading in a low-resistance circuit but will produce a totally inaccurate reading in a high-resistance circuit.
- The voltmeter is always connected across the two points between which the potential difference is to be measured.
- If it is connected across a low resistance, then since the voltmeter resistance is much higher, most of the current still passes through the low resistance and the indicated voltage drop is essentially the true reading.
- But if the voltmeter is connected across a high resistance, then with two comparable resistances in parallel the current divides almost equally between the two paths. The meter then records a voltage drop across the high resistance that is much lower than the true reading.
- Thus, a low-sensitivity instrument used in a high-resistance circuit gives a lower than true reading. This is called the loading effect of voltmeters. It is mainly caused by low-sensitivity instruments.
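The sketch below reproduces this argument numerically for a hypothetical circuit: a source driving two series resistors, with a voltmeter of finite resistance placed across the second one. All component values are assumptions for illustration.

```python
# Sketch of the voltmeter loading effect: a source drives two series
# resistors, and a voltmeter of finite resistance RM is placed across R2.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

def measured_voltage(v_source, r1, r2, r_meter):
    """Voltage the meter actually indicates across R2."""
    r2_loaded = parallel(r2, r_meter)
    return v_source * r2_loaded / (r1 + r2_loaded)

V, RM = 100.0, 1e6          # 100 V source, 1 Mohm voltmeter

# Low-resistance circuit: loading is negligible, reading ~ true value (50 V).
print(measured_voltage(V, 1e3, 1e3, RM))     # ~49.98 V

# High-resistance circuit: R2 is comparable to RM, so the meter diverts a
# large share of the current and reads well below the true 50 V.
print(measured_voltage(V, 1e6, 1e6, RM))     # ~33.3 V
```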
Calibration
- Instrument calibration is one of the primary processes used to maintain instrument accuracy. Calibration is the process of configuring an instrument to provide a result for a sample within an acceptable range.
- In other words, calibration is the process of adjusting and verifying the accuracy of a measuring instrument or system, such as an electronic device or sensor, to ensure that it provides correct readings or outputs within the specified tolerance levels.
- If any variation is found, the instrument is recalibrated so that it gives correct readings and values. It is common for any instrument to lose its calibration after a long period of usage.
- After the process of calibration, the instrument is good to use again.
Need of calibration
- A crucial measurement
- If the instrument has undergone adverse conditions and cannot give the right reading
- When the output does not match the standard instrument
- Drastic change in weather
- Cyclic testing of instruments
Advantages of Calibration
- Calibration is proof that the instrument is working properly.
- Increases the confidence of the instrument user.
- Calibration fulfils the requirement of traceability.
- Saves power and cost.
- Reduces rejection and failure rates, hence higher productivity.
- Interchangeability.
- Improved product and service quality, leading to satisfied customers.
- Increases safety.
Calibration Process
- The calibration process in electronics generally involves the following steps:
- Preparation: This step involves ensuring that the device to be calibrated is properly cleaned and in good working condition and that all necessary tools and reference standards are available.
- Connection: The device to be calibrated is connected to the reference standard and any necessary test equipment is set up.
- Measurement: The device is then measured using the reference standard, and the readings are compared to the known values of the reference standard.
- Adjustment: If necessary, the device is adjusted to bring its readings into alignment with the reference standard. This may involve adjusting internal electronics or physical components or making changes to the device’s software or firmware.
- Documentation: The results of the calibration are documented, including the readings of the device before and after calibration, the reference standard used, and any adjustments made to the device.
- Verification: The device is then re-measured to verify that it is providing accurate and consistent readings, and to ensure that the calibration process was successful.
- Repeat: If necessary, the calibration process may be repeated several times to ensure that the device provides accurate readings.
- This is a general overview of the calibration process. It may vary depending on the type of device being calibrated and the level of accuracy required for the application.
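As an illustration of the measurement, adjustment and verification steps, here is a minimal sketch that compares device readings against a reference standard, fits a simple two-point (gain and offset) correction, and checks the residual errors. The readings and the correction scheme are assumptions for illustration, not a prescribed procedure.

```python
# Sketch of the measurement/adjustment/verification steps of calibration:
# compare device readings with a reference standard, fit a two-point
# correction, then verify the residual errors.

reference = [0.0, 25.0, 50.0, 75.0, 100.0]    # known standard values
device    = [1.2, 25.9, 50.8, 75.4, 100.1]    # readings before adjustment

# Two-point correction from the end points: corrected = gain * raw + offset.
gain = (reference[-1] - reference[0]) / (device[-1] - device[0])
offset = reference[0] - gain * device[0]

corrected = [gain * d + offset for d in device]

# Verification: residual error after adjustment at every calibration point.
for ref, cor in zip(reference, corrected):
    print(f"standard {ref:6.1f}  corrected {cor:6.2f}  error {cor - ref:+.2f}")
```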